
    Paper Session II-B - Simulating Shuttle and Derivative Vehicle Processing at Kennedy Space Center

    Rockwell International Space Systems Division is teamed with the University of Central Florida on a research project to develop an automated simulation system to model ground processing scenarios for the Shuttle and Shuttle-derived vehicles. This simulation system is needed to evaluate launch site facility requirements and to estimate life-cycle costs of future space programs. This paper presents the results of initial simulation modeling of the orbiter processing critical path at Kennedy Space Center (KSC). An approach is presented for the planned capabilities to simulate mixed-fleet processing and to perform sensitivity, capacity, cost, and risk analyses. Potential expert system applications for the simulation system are presented, such as a resource allocation tool for standdown periods or a long-range scheduling tool for future programs like the Space Exploration Initiative. The simulation model will be developed using the object-oriented languages MODSIM II and C++. This model differs from other software tools currently used for planning at KSC in that it is stochastic rather than deterministic. A deterministic model assumes all parameters of the model are known constants; a stochastic model defines the operations process using an indexed collection of random variables. The modeling system will be expandable using object-oriented inheritance techniques in which facilities and vehicles are modeled as templates. This system also differs from other planning systems used at KSC in that supplemental vehicle and/or facility data can be introduced during program execution. This technique allows effective modeling of dynamic launch site environments for future programs.
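    The deterministic/stochastic distinction drawn above can be sketched in a few lines. This is an illustrative toy in Python, not the paper's MODSIM II model: the three processing steps and their durations are assumed example values, and each stochastic duration is modeled here with a triangular distribution around the nominal value.

    ```python
    import random

    # Hypothetical orbiter-processing steps with nominal durations in days;
    # the actual task breakdown is defined in the paper's MODSIM II model.
    NOMINAL = {"OPF processing": 65, "VAB stacking": 6, "pad operations": 25}

    def deterministic_flow_days():
        # Deterministic model: every parameter is a known constant.
        return sum(NOMINAL.values())

    def stochastic_flow_days(rng):
        # Stochastic model: each step is a random variable; a triangular
        # distribution around the nominal duration is assumed here.
        return sum(rng.triangular(0.8 * d, 1.5 * d, d) for d in NOMINAL.values())

    rng = random.Random(42)
    samples = [stochastic_flow_days(rng) for _ in range(10_000)]
    print(deterministic_flow_days())     # one fixed answer
    print(sum(samples) / len(samples))   # mean of a distribution of outcomes
    ```

    The stochastic version yields a distribution of flow times rather than a single number, which is what enables the sensitivity, capacity, and risk analyses the paper describes.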

    A dynamical adaptive resonance architecture

    A set of nonlinear differential equations that describe the dynamics of the ART1 model are presented, along with the motivation for their use. These equations are extensions of those developed by Carpenter and Grossberg (1987). It is shown how these differential equations allow the ART1 model to be realized as a collective nonlinear dynamical system. Specifically, we present an ART1-based neural network model whose description requires no external control features. That is, the dynamics of the model are completely determined by the set of coupled differential equations that comprise the model. It is shown analytically how the parameters of this model can be selected so as to guarantee a behavior equivalent to that of ART1 in both fast and slow learning scenarios. Simulations are performed in which the trajectories of node and weight activities are determined using numerical approximation techniques.
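    The numerical approximation step mentioned above can be illustrated with forward Euler integration. The paper's ART1 equations are not reproduced here; as a stand-in, this sketch integrates a single Grossberg-style shunting equation dx/dt = -A*x + (B - x)*I with assumed constants, and compares the result against the analytic equilibrium x* = B*I / (A + I).

    ```python
    # Forward Euler integration of a node activity trajectory.
    # A = decay rate, B = activity ceiling, I = constant input (assumed values;
    # the actual ART1 node and weight equations are given in the paper).
    A, B, I = 1.0, 1.0, 0.5
    dt, steps = 0.01, 2000

    x = 0.0  # initial node activity
    for _ in range(steps):
        x += dt * (-A * x + (B - x) * I)

    # Analytic equilibrium for comparison: x* = B*I / (A + I)
    print(x, B * I / (A + I))
    ```

    The same scheme extends to a coupled system by updating all node and weight variables simultaneously at each step.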

    Characterizing Open Addressing Hash Functions

    Abstract: In [1], we showed that different open addressing hash functions perform differently when the data elements are not uniformly distributed. It is therefore tempting to attribute their differences to some mechanism governing the behavior of the hash functions. In this paper, a simple method of characterizing open addressing hash functions is presented. We show that, indeed, a hash function's data-spreading ability characterizes its behavior. We measured and analyzed the spreading speed of a cluster of data elements under different open addressing hash functions. Our experimental results and theoretical analysis show that different hash functions differ in their ability to spread out clustered data elements. A hash function that spreads clustered data elements over the whole table space more uniformly and more quickly performs better when applied to clustered data. Experimental results are presented to support our claims, followed by some theoretical analysis.

    Hash function families. In this paper we present hash functions in the form of nonlinear dynamical systems, so we first derive dynamical system expressions for the different hashing families.

    Linear and quadratic hashing. The family of linear hash functions can be expressed as

        h(k, i) = (h(k) + c1 * i) mod m,    (1)

    where h is an ordinary hash function and i = 0, 1, ... is the probe number. This technique is known as linear hashing because the argument of the modulus operator is linearly dependent on the probe number. In general, c1 needs to be chosen so that it is relatively prime to m if all slots in the hash table are to be examined by the probe sequence. To construct an equivalent dynamical system transformation of equation (1), we use the fact that for a, b, m ∈ ℝ, (a + b) mod m = ((a mod m) + b) mod m, and rewrite equation (1) as the recurrence

        H(k, i + 1) = (H(k, i) + c1) mod m,  H(k, 0) = h(k) mod m.    (2)

    Note that the dependence on k is specified in the initial condition H(k, 0).
    Quadratic hashing is a simple extension of linear hashing that makes the probe sequence nonlinearly dependent on the probe number. For any ordinary hash function h, the family of quadratic hash functions is given by

        h(k, i) = (h(k) + c1 * i + c2 * i^2) mod m,    (3)

    where c1 and c2 are positive constants. Once again, the specific values chosen for the constants are critical to the performance of this method. To obtain a recurrence relation solution to equation (3), we note that (i + 1)^2 = i^2 + 2i + 1, which gives

        H(k, i + 1) = (H(k, i) + c1 + c2 * (2i + 1)) mod m,  H(k, 0) = h(k) mod m.    (4)
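    The linear and quadratic probe sequences described above can be written directly as recurrences H(k, i+1) = f(H(k, i)). A minimal sketch, with example table size and constants:

    ```python
    def linear_probes(h0, m, c1, n):
        # Linear hashing: H(k, i+1) = (H(k, i) + c1) mod m, H(k, 0) = h(k) mod m
        H, seq = h0 % m, []
        for _ in range(n):
            seq.append(H)
            H = (H + c1) % m
        return seq

    def quadratic_probes(h0, m, c1, c2, n):
        # Quadratic hashing via the recurrence derived from (i+1)^2 - i^2 = 2i + 1:
        # H(k, i+1) = (H(k, i) + c1 + c2*(2i + 1)) mod m
        H, seq = h0 % m, []
        for i in range(n):
            seq.append(H)
            H = (H + c1 + c2 * (2 * i + 1)) % m
        return seq

    # With c1 relatively prime to m, linear probing visits every slot exactly once.
    print(linear_probes(3, 7, 1, 7))        # [3, 4, 5, 6, 0, 1, 2]
    print(quadratic_probes(3, 7, 1, 1, 4))  # matches (3 + i + i^2) mod 7
    ```

    The recurrence form is what lets these families be analyzed as dynamical systems: each probe is one iteration of the map, with the key entering only through the initial condition.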

    An application of gradient-like dynamics to neural networks

    This paper reviews a formalism that enables the dynamics of a broad class of neural networks to be understood. This formalism is then applied to a specific network, and the predicted and simulated behaviors of the system are compared. The purpose of this work is to utilize a model of the dynamics that also describes the phase-space behavior and structural stability of the system. This is achieved by writing the general equations of the neural network dynamics as a gradient-like system. In this paper it is demonstrated that a network with additive activation dynamics and Hebbian weight update dynamics can be expressed as a gradient-like system. An example of an S-layer network with feedback between adjacent layers is presented. It is shown that the process of weight learning is stable in this network when the learned weights are symmetric. Furthermore, the weight learning process is stable when the learned weights are asymmetric, provided that the activation is computed using only the symmetric part of the weights.
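    The gradient-like idea can be checked numerically in a small example. This is not the paper's S-layer network, just a minimal sketch under assumed parameters: with a symmetric weight matrix W, the additive dynamics dx/dt = -x + W*x are gradient descent on V(x) = (1/2)x.x - (1/2)x.Wx, so V never increases along a trajectory.

    ```python
    import random

    random.seed(1)
    n = 4
    # Build a symmetric W by taking the symmetric part of a random matrix,
    # mirroring the paper's condition for stability.
    A = [[random.uniform(-0.5, 0.5) for _ in range(n)] for _ in range(n)]
    W = [[(A[i][j] + A[j][i]) / 2 for j in range(n)] for i in range(n)]

    def V(x):
        # Candidate energy: V(x) = (1/2) sum x_i^2 - (1/2) sum W_ij x_i x_j
        quad = sum(xi * xi for xi in x)
        inter = sum(W[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        return 0.5 * quad - 0.5 * inter

    x = [random.uniform(-1, 1) for _ in range(n)]
    dt, energies = 0.01, [V(x)]
    for _ in range(1000):
        # dx/dt = -grad V(x) = -(x - W x), integrated by forward Euler
        grad = [x[i] - sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [x[i] - dt * grad[i] for i in range(n)]
        energies.append(V(x))

    print(all(b <= a + 1e-12 for a, b in zip(energies, energies[1:])))  # True
    ```

    With an asymmetric W the same V is no longer a gradient potential, which is why the abstract requires the activation to use only the symmetric part of the weights.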

    An exponential open hashing function based on dynamical systems theory

    In this paper an efficient open addressing hash function called exponential hashing is developed using concepts from dynamical systems theory and number theory. A comparison of exponential hashing versus a widely used double hash function is performed using an analysis based on Lyapunov exponents and entropy. Proofs of optimal table parameter choices are provided for a number of hash functions. We also demonstrate experimentally that exponential hashing nearly matches the performance of an optimal double hash function for uniform data distributions and performs significantly better for nonuniform data distributions. We show that exponential hashing exhibits a higher integer Lyapunov exponent and entropy than double hashing for initial data probes, which offers one explanation for its improved performance on nonuniform data distributions.
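    A hedged sketch of the two probe-sequence styles being compared. The exact definition of exponential hashing and its parameter proofs are in the paper; the multiplicative recurrence H(k, i+1) = (a * H(k, i)) mod m below, with m prime and a a primitive root mod m, is an assumed illustrative form that makes the probe sequence cycle through every nonzero slot. Double hashing is the standard baseline.

    ```python
    def exponential_probes(h0, m, a, n):
        # Assumed multiplicative recurrence: H(k, i+1) = (a * H(k, i)) mod m.
        # With m prime and a a primitive root mod m, the orbit of any nonzero
        # start visits all m - 1 nonzero slots.
        H = h0 % m or 1  # keep the orbit off the absorbing state 0
        seq = []
        for _ in range(n):
            seq.append(H)
            H = (a * H) % m
        return seq

    def double_hash_probes(h1, h2, m, n):
        # Classic double hashing: H(k, i) = (h1(k) + i * h2(k)) mod m
        return [(h1 + i * h2) % m for i in range(n)]

    # m = 11 (prime), a = 2 (a primitive root mod 11): all 10 nonzero slots.
    print(sorted(exponential_probes(3, 11, 2, 10)))
    # h2 = 5 is coprime to 11, so double hashing also covers the table.
    print(sorted(double_hash_probes(3, 5, 11, 11)))
    ```

    The structural difference is visible in the recurrences: double hashing advances by a fixed per-key increment, while the multiplicative map's step size changes every iteration, which is what the Lyapunov-exponent analysis in the paper quantifies.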

    Performance measures for image watermarking schemes

    In this paper we propose performance measures and provide a description of the application of digital watermarking for use in copyright protection of digital images. The watermarking process we consider involves embedding visually imperceptible data in a digital image in such a way that it is difficult to detect or remove unless one possesses specific secret information. The major processes involved in watermarking are considered, including insertion, attack, and detection/extraction. Generic features of these processes are discussed, along with open issues and areas where experimentation is likely to prove useful.
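    The insertion and detection/extraction stages can be illustrated with a toy key-based scheme. This is NOT the paper's method, just a minimal sketch: a secret key seeds a pseudorandom choice of pixel positions whose least significant bits carry the watermark, so extraction requires the same key.

    ```python
    import random

    def embed(pixels, bits, key):
        # Insertion: the key selects which pixels carry the watermark bits.
        rng = random.Random(key)
        positions = rng.sample(range(len(pixels)), len(bits))
        out = list(pixels)
        for pos, b in zip(positions, bits):
            out[pos] = (out[pos] & ~1) | b  # overwrite the least significant bit
        return out

    def extract(pixels, n_bits, key):
        # Detection/extraction: regenerate the same positions from the key.
        rng = random.Random(key)
        positions = rng.sample(range(len(pixels)), n_bits)
        return [pixels[pos] & 1 for pos in positions]

    image = [random.randrange(256) for _ in range(64)]  # stand-in 8-bit "image"
    mark = [1, 0, 1, 1, 0, 0, 1, 0]
    marked = embed(image, mark, key=1234)
    print(extract(marked, len(mark), key=1234))  # recovers the mark
    ```

    Each pixel changes by at most 1 (visual imperceptibility), while an attacker without the key does not know which positions to read; robustness against the attack stage is exactly where schemes like this toy fail and the paper's performance measures apply.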

    fMRI Pattern Classification using Neuroanatomically Constrained Boosting

    Pattern classification in functional MRI (fMRI) is a novel methodology to automatically identify differences in distributed neural substrates resulting from cognitive tasks. Reliable pattern classification is challenging due to the high dimensionality of fMRI data, the small number of available data sets, interindividual differences, and dependence on the acquisition methodology. Thus, most previous fMRI classification methods were applied in individual subjects. In this study, we developed a novel approach to improve multiclass classification across groups of subjects, field strengths, and fMRI methods. Spatially normalized activation maps were segmented into functional areas using a neuroanatomical atlas, and each area was classified separately using local classifiers. A single multiclass output was obtained using a weighted aggregation of the classifiers' outputs. An AdaBoost technique was applied, modified to find the optimal aggregation of a set of spatially distributed classifiers. This AdaBoost combined the region-specific classifiers to achieve improved classification accuracy with respect to conventional techniques. Multiclass classification accuracy was assessed in an fMRI group study with an interleaved motor, visual, auditory, and cognitive task design. Data were acquired across 18 subjects at different field strengths (1.5 T, 4 T) and with different pulse sequence parameters (voxel size and readout bandwidth). Misclassification rates of the boosted classifier were between 3.5% and 10%, whereas for the single classifier these were between 15% and 23%, suggesting that the boosted classifier provides better generalization ability together with better robustness. The high computational speed of boosting classification makes it attractive for real-time fMRI to facilitate online interpretation of dynamically changing activation patterns.
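    The weighted aggregation of region-specific classifiers can be sketched in a few lines. This is not the paper's implementation; the regions, votes, and error rates below are hypothetical, and the weights use the standard AdaBoost form alpha = 0.5 * ln((1 - err) / err), so regions with lower training error dominate the combined multiclass output.

    ```python
    import math

    def region_weight(err):
        # AdaBoost-style weight for a classifier with training error err (0 < err < 1):
        # accurate regions (small err) get large positive weight.
        return 0.5 * math.log((1 - err) / err)

    def aggregate(regional_votes, errors, classes):
        # Combine per-region class votes into one multiclass decision by
        # summing each region's weight into the class it voted for.
        score = {c: 0.0 for c in classes}
        for vote, err in zip(regional_votes, errors):
            score[vote] += region_weight(err)
        return max(score, key=score.get)

    classes = ["motor", "visual", "auditory", "cognitive"]
    votes = ["motor", "visual", "motor", "visual"]  # hypothetical regional outputs
    errors = [0.05, 0.40, 0.10, 0.45]               # hypothetical regional errors
    print(aggregate(votes, errors, classes))        # the accurate regions win
    ```

    Because the aggregation is only a weighted sum over precomputed regional outputs, evaluating it is cheap, which is consistent with the abstract's point about suitability for real-time fMRI.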